LFCS Simulator

https://killer.sh

 

 

Instructions

  1. You have access to 5 servers:

  • terminal (default)

  • web-srv1

  • app-srv1

  • data-001

  • data-002

  2. The server on which to solve a question is mentioned in the question text. If no server is mentioned you'll need to create your solution on the default terminal

  3. If you're asked to create solution files at /opt/course/*, then always do this on your main terminal

  4. You can connect to each server using ssh, like ssh web-srv1

  5. All server addresses are configured in /etc/hosts on each server

  6. Nested ssh is not possible: you can only connect to each server from your main terminal

  7. It's not possible to restart single servers. If deeper issues or misconfigurations occur then the only solution might be to restart the complete simulator. This is possible via the top menu by selecting "Restart Session"

  8. This simulator might not contain all LFCS exam topics. Attendees are still required to learn and study the complete curriculum

 

 

 

Question 1 | Kernel and System Info

 

Write the Linux Kernel release into /opt/course/1/kernel.

Write the current value of Kernel parameter ip_forward into /opt/course/1/ip_forward.

Write the system timezone into /opt/course/1/timezone.

 

ℹ️ If no server is mentioned in the question text, you'll need to create your solution on the default terminal

 

Answer:

The files should look like:
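One way to produce them (a sketch; `timedatectl show` requires systemd, and `cat /etc/timezone` works as an alternative on Debian/Ubuntu):

```shell
mkdir -p /opt/course/1
uname -r > /opt/course/1/kernel
sysctl -n net.ipv4.ip_forward > /opt/course/1/ip_forward       # same value as /proc/sys/net/ipv4/ip_forward
timedatectl show -p Timezone --value > /opt/course/1/timezone  # or: cat /etc/timezone
```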

 

 

Question 2 | CronJobs

 

On server data-001, user asset-manager is responsible for timed operations on existing data. Some changes and additions are necessary.

Currently there is one system-wide cronjob configured that runs every day at 8:30pm. Convert it from being a system-wide cronjob to one owned and executed by user asset-manager. This means that user should see it when running crontab -l.

Create a new cronjob owned and executed by user asset-manager that runs bash /home/asset-manager/clean.sh every week on Monday and Thursday at 11:15am.

 

ℹ️ You can connect to servers using ssh, for example ssh data-001

 

Answer:

Step 1

Here we should move a cronjob from system-wide to user asset-manager. First we check out that cronjob:

We go ahead and cut this line from that file, or copy it and remove it later! Next we're going to add it to the user's crontab:

ℹ️ Here we shouldn't specify the user field any longer!

The system-wide cronjobs in /etc/crontab always specify the user that executes the command. In a user crontab that field is no longer necessary.

After saving the file we should be able to:

Now we see that migrated cronjob!
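The migration could look like this (a sketch; the actual command in /etc/crontab is environment-specific, `/path/to/job.sh` is a placeholder for it):

```shell
ssh data-001
sudo vim /etc/crontab                 # cut the 8:30pm line, e.g.: 30 20 * * * asset-manager /path/to/job.sh
sudo crontab -e -u asset-manager      # add it without the user field: 30 20 * * * /path/to/job.sh
sudo crontab -l -u asset-manager      # the migrated cronjob shows up
```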

 

Step 2

For the next step we should add a new cronjob. We can just copy and then change the existing one to our needs:

Save and check:

For guidance check the comments in /etc/crontab, they're really useful. Instead of numbers for the days we can also use the actual names of days:
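The new schedule line could be written either way (Monday=1, Thursday=4):

```
15 11 * * 1,4 bash /home/asset-manager/clean.sh
# or with day names:
15 11 * * Mon,Thu bash /home/asset-manager/clean.sh
```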

The files for user crontabs are stored at location /var/spool/cron/crontabs and user root can access those.

 

 

Question 3 | Time synchronisation Configuration

 

Time synchronisation configuration needs to be updated:

  1. Set 0.pool.ntp.org and 1.pool.ntp.org as main NTP servers

  2. Set ntp.ubuntu.com and 0.debian.pool.ntp.org as fallback NTP servers

  3. The maximum poll interval should be 1000 seconds and the connection retry 20 seconds

 

Answer:

 

ℹ️ Use man timesyncd.conf for help

 

A good idea would probably be to take a look at the current situation:

Here we see for example the current local time and timezone. Let's open the configuration:

We see three German NTP servers currently configured via the NTP setting.

 

Test NTP servers

We can test single NTP servers manually for a sense of certainty:

Above we see one successful request and one to www.google.de that failed. This is correct because the Google web-domain doesn't provide an NTP service.

 

Step 1: Main servers

We adjust the config:

 

Step 2: Fallback servers

Oftentimes various settings are already included in timesyncd.conf but commented out. Here it seems that we have to work with a pretty clean file. Hence we can use man timesyncd.conf for help:

 

Step 3: Remaining settings

Here we also use the man pages as help:
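Put together, the config could look like this (a sketch; all option names are documented in man timesyncd.conf):

```
# /etc/systemd/timesyncd.conf
[Time]
NTP=0.pool.ntp.org 1.pool.ntp.org
FallbackNTP=ntp.ubuntu.com 0.debian.pool.ntp.org
PollIntervalMaxSec=1000
ConnectionRetrySec=20
```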

 

Final: Restart service

Now we restart the service:

Good to check the service status for warnings or errors:

Status output looking good. In the logs above we can see which NTP server was used for synchronisation. We could also check the logs with:
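The restart and verification steps above could be (a sketch):

```shell
sudo systemctl restart systemd-timesyncd
systemctl status systemd-timesyncd       # check for warnings or errors
journalctl -u systemd-timesyncd          # which NTP server was used?
```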

Server 0.pool.ntp.org was used here which means our configuration change worked.

 

 

Question 4 | Environment Variables

There is an existing env variable for user candidate@terminal: VARIABLE1=random-string, defined in file .bashrc. Create a new script under /opt/course/4/script.sh which:

  1. Defines a new env variable VARIABLE2 with content v2, only available in the script itself

  2. Outputs the content of the env variable VARIABLE2

  3. Defines a new env variable VARIABLE3 with content ${VARIABLE1}-extended, available in the script itself and all child processes of the shell as well

  4. Outputs the content of the env variable VARIABLE3

 

ℹ️ Do not alter the .bashrc file, everything needs to be done in the script itself

 

Answer:

Well, let's check the existing variable and its content as mentioned:

How is the variable's value defined? Let's check the .bashrc file:

Now let's create a script which will define a new env variable called VARIABLE2 with content v2:

We give it a try, it should output the variable content, but shouldn't make it available (export) afterwards:

Finally, we define the third environment variable called VARIABLE3 within the same script:

Run and check the result:
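Put together, the script could look like this (a sketch; it assumes VARIABLE1 is exported in the environment as described):

```shell
#!/bin/bash
# /opt/course/4/script.sh
VARIABLE2=v2                               # plain assignment: only visible in this script
echo "$VARIABLE2"
export VARIABLE3="${VARIABLE1}-extended"   # export: also visible to child processes
echo "$VARIABLE3"
```

Running `bash /opt/course/4/script.sh` should then print `v2` followed by `random-string-extended`.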

What's the difference between export and not using export? Let's demonstrate it:

 

 

Question 5 | Archives and Compression

 

There is archive /imports/import001.tar.bz2 on server data-001. You're asked to create a new gzip compressed archive with its raw contents.

Store the new archive under /imports/import001.tar.gz. Compression should be the best possible, using gzip.

To make sure both archives contain the same files, write a list of their sorted contents into /imports/import001.tar.bz2_list and /imports/import001.tar.gz_list.

 

ℹ️ Do not modify or delete the original archive import001.tar.bz2

 

 

Answer:

We connect to data-001 and have a look at the folder:

 

Possibility 1 (use the tar layer)

We extract the bzip2 archive and receive an uncompressed tar archive:

ℹ️ We can install bzip2 via the package manager if not available

Every tar archive contains a "tar" data layer. This can then be further compressed with various compression algorithms. Here we can now go ahead and create a new gzip compression from the tar layer:
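That could look like this (a sketch; `-k` keeps the original archive as required):

```shell
cd /imports
bzip2 -dk import001.tar.bz2   # decompress, keep the original .bz2
gzip -9 import001.tar         # best compression, produces import001.tar.gz
```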

 

Possibility 2 (completely extract and pack again)

We extract the files into a new subfolder:

Now we create the new required archive:

Using the GZIP env variable is deprecated, instead we could use:

We should see:

 

Finally

We ensure that both archives contain the same files and structure:
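A sketch of the list creation (tar -t auto-detects the compression):

```shell
cd /imports
tar -tf import001.tar.bz2 | sort > import001.tar.bz2_list
tar -tf import001.tar.gz  | sort > import001.tar.gz_list
diff import001.tar.bz2_list import001.tar.gz_list   # no output means identical
```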

To compare further we could use cat import001.tar.bz2_list | sha512sum and compare the hashes.

To see some info about the compression ratio we can run

Finally we should have these files:

 

 

Question 6 | User, Groups and Sudoers

 

On server app-srv1:

  1. Change the primary group of user user1 to dev and the home directory to /home/accounts/user1

  2. Add a new user user2 with groups dev and op, home directory /home/accounts/user2, terminal /bin/bash

  3. User user2 should be able to execute sudo bash /root/dangerous.sh without having to enter the root password

 

Answer:

Step 1

We can use different approaches. We could:

Or we could edit /etc/passwd manually:

No matter which approach we use, the result should look like this:

And to change the primary group:
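Using usermod, both changes could look like this (a sketch; it assumes group dev already exists):

```shell
ssh app-srv1
sudo usermod -d /home/accounts/user1 -m user1   # -m moves the existing home content
sudo usermod -g dev user1                       # set the primary group
grep user1 /etc/passwd                          # verify
```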

 

Step 2

First we can check available options:

Using the correct arguments we create the required new user:

To verify that it was added to the required groups:
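A sketch of the useradd call and the verification:

```shell
sudo useradd -m -d /home/accounts/user2 -s /bin/bash -G dev,op user2
id user2   # should list groups dev and op
```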

 

Step 3

Now it's getting interesting. We can try to execute the script with current configuration:

We need to configure sudoers to allow this specific script call. We should always edit the /etc/sudoers file using the command visudo, because it performs proper syntax validation before saving the file. Any misconfiguration of that file could lock us out of the system for good. So we do as root:
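The added line could look like this (a sketch; the bash path should match the one sudo resolves, see `which bash`):

```
user2 ALL=(ALL) NOPASSWD: /usr/bin/bash /root/dangerous.sh
```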

You can exit via Ctrl + X, then Y and then Enter to save.

And to verify:

 

 

Question 7 | Network Packet Filtering

 

Server data-002 is used for big data and provides internally used apis for various data operations. You're asked to implement network packet filters on interface eth0 on data-002:

  1. Port 5000 should be closed

  2. Redirect all traffic on port 6000 to local port 6001

  3. Port 6002 should only be accessible from IP 192.168.10.80 (server data-001)

  4. Block all outgoing traffic to IP 192.168.10.70 (server app-srv1)

 

ℹ️ In case of misconfiguration you can still access the instance using sudo lxc exec data-002 bash

 

Answer:

First we could test the mentioned ports on data-002 from remote:

Further we can check for existing iptables rules and interfaces, because we're asked to implement the filters for eth0:

Above we can see the eth0 interface and that there are no existing iptables rules implemented.

 

Step 1

We're asked to close port 5000 and are going to use iptables for it:
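A sketch of the rule:

```shell
ssh data-002
sudo iptables -A INPUT -i eth0 -p tcp --dport 5000 -j DROP
```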

Done, let's give it a test:

 

Step 2

Now we're going to perform some NAT for connections on port 6000:
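Using the REDIRECT target in the nat table, this could look like:

```shell
sudo iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 6000 -j REDIRECT --to-ports 6001
```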

View existing NAT rules:

If we would like to clear these we could run iptables -F -t nat. Let's test the result:

Above we see the redirect from 6000 to 6001 works.

 

Step 3

Now we're asked to open port 6002 only from a specific source IP:
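A sketch of the two rules (rule order matters, the ACCEPT must come first):

```shell
sudo iptables -A INPUT -i eth0 -p tcp --dport 6002 -s 192.168.10.80 -j ACCEPT
sudo iptables -A INPUT -i eth0 -p tcp --dport 6002 -j DROP
```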

The idea there is that we first allow that IP and then deny all other traffic on that port. Let's verify it:

 

Step 4

In the final step we need to drop outgoing packets:
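This time the rule goes into the OUTPUT chain (sketch):

```shell
sudo iptables -A OUTPUT -o eth0 -d 192.168.10.70 -j DROP
```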

Above we can see that outgoing connections to app-srv1 no longer work and they time out.

 

 

Question 8 | Disk Management

 

Your team selected you for this task because of your deep filesystem and disk/devices expertise. Solve the following steps so as not to let your team down:

  1. Format /dev/vdb with ext4, mount it to /mnt/backup-black and create empty file /mnt/backup-black/completed.

  2. Find which of the two disks, /dev/vdc or /dev/vdd, has higher storage usage.

    Then empty the .trash folder on it.

  3. There are two processes running: dark-matter-v1 and dark-matter-v2.

    Find the one that consumes more memory or virtual memory.

    Then unmount the disk where the process executable is located on.

 

Answer:

A good way to start is probably to list existing disks:

Another way with good list style and information:

Or even another way:

 

Step 1

We go right ahead and format the disk:

Then we mount it at the required location and create the file:
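Put together, the format-mount-touch sequence could look like this (a sketch):

```shell
sudo mkfs.ext4 /dev/vdb
sudo mkdir -p /mnt/backup-black
sudo mount /dev/vdb /mnt/backup-black
sudo touch /mnt/backup-black/completed
```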

 

Step 2

Now we need to check the used disk space of two disks:

We see that we need to clear up disk space on /mnt/backup001:
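A sketch of the check and cleanup (the mount points come from the df output; /mnt/backup001 is the one named above):

```shell
df -h | grep -E 'vdc|vdd'                       # compare the Use% of both disks
sudo find /mnt/backup001/.trash -mindepth 1 -delete   # empty .trash, including hidden files
```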

The results should be shown:

 

Step 3

For the final step we need to check the memory of two processes and then find out on which disk the largest consumer runs:

It would also be possible to use top -b | grep dark for example.

We see the MEM and VSZ usage and the full paths at the very end. This means process dark-matter-v2 is the offender and the executable is to be found at /mnt/app-4e9d7e1e/dark-matter-v2. So we need to do:

The disk is busy, probably because of that one process! But we can check for any other processes blocking this:

Well, only that one bad process! Hence we can finish this question with:
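The whole investigation could look like this (a sketch; the `<PID>` placeholder comes from the ps output, /mnt/app-4e9d7e1e as named above):

```shell
ps aux | grep dark-matter       # compare the %MEM and VSZ columns
sudo umount /mnt/app-4e9d7e1e   # fails: target is busy
sudo lsof /mnt/app-4e9d7e1e     # list processes keeping it busy
sudo kill <PID>                 # end dark-matter-v2
sudo umount /mnt/app-4e9d7e1e   # now succeeds
```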

 

 

Question 9 | Find files with properties and perform actions

 

There is a backup folder on server data-001 at /var/backup/backup-015, it needs to be cleaned up.

First:

  • Delete all files modified before 01/01/2020

Then for the remaining:

  • Find all files smaller than 3KiB and move these to /var/backup/backup-015/small/

  • Find all files larger than 10KiB and move these to /var/backup/backup-015/large/

  • Find all files with permission 777 and move these to /var/backup/backup-015/compromised/

 

Answer:

First we find the backup location:

Seems to contain a good amount of files! Now we need to clean it up. A good way for this is to use find with arguments and a command to execute, like:

 

Delete files before date

Using this we can delete all files modified before 2020. Always "debug" such a command first by just listing the matches without executing the action:
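With GNU find's `-newermt` this could look like (sketch; run the listing first, then add `-delete`):

```shell
cd /var/backup/backup-015
find . -maxdepth 1 -type f ! -newermt 2020-01-01          # dry run: list only
find . -maxdepth 1 -type f ! -newermt 2020-01-01 -delete  # then actually delete
```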

 

Move small files

Now we move all small files into the subfolder:

 

Move large files

Next we move all larger files into the subfolder:

 

Move open permission files

And finally we move all files with too open permissions into the subfolder:
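The three moves above could be sketched like this (assuming the target subfolders exist; otherwise create them with mkdir first):

```shell
cd /var/backup/backup-015
find . -maxdepth 1 -type f -size -3k  -exec mv {} small/ \;
find . -maxdepth 1 -type f -size +10k -exec mv {} large/ \;
find . -maxdepth 1 -type f -perm 777  -exec mv {} compromised/ \;
```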

 

Result

 

 

Question 10 | SSHFS and NFS

 

In this task it's required to access remote filesystems over network.

On your main server terminal use SSHFS to mount directory /data-export from server app-srv1 to /app-srv1/data-export. The mount should be read-write and option allow_other should be enabled.

The NFS service has been installed on your main server terminal. Directory /nfs/share should be read-only accessible from 192.168.10.0/24. On app-srv1, mount the NFS share /nfs/share to /nfs/terminal/share.

 

Answer:

 

SSHFS

We go ahead and create the SSHFS mount:
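A sketch of the mount (assumes the sshfs package is installed; read-write is the default):

```shell
sudo mkdir -p /app-srv1/data-export
sudo sshfs -o allow_other app-srv1:/data-export /app-srv1/data-export
```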

No errors but also no output, we further verify:

Above we can see that creating a new file works and that both directories are synced. Note that all users can now access that mount in read-write mode because of option allow_other, which might not be something we want in production environments!

 

NFS Server

We could start by verifying the service runs without issues:

NFS server seems to be running. We can expose certain directories via /etc/exports:

The file provides some comments with examples which can be very useful. After adding the exports we need to run:
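The export line and re-export could look like this (sketch):

```shell
echo '/nfs/share 192.168.10.0/24(ro,sync,no_subtree_check)' | sudo tee -a /etc/exports
sudo exportfs -ra   # re-export everything in /etc/exports
sudo exportfs -v    # verify the export and its options
```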

 

NFS Client

Now we check if our NFS server-side settings can actually be accessed on client-side by NFS mounting:
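The client-side mount could be (sketch):

```shell
ssh app-srv1
sudo mkdir -p /nfs/terminal/share
sudo mount terminal:/nfs/share /nfs/terminal/share
mount | grep /nfs/terminal/share   # verify
```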

Above we see that we were able to mount the required NFS export in read-only mode on app-srv1.

 

 

Question 11 | Docker Management

 

Someone overheard that you're a Containerisation Specialist, so the following should be easy for you! Please:

  1. Stop the Docker container named frontend_v1

  2. Gather information from Docker container named frontend_v2:

    • Write its assigned ip address into /opt/course/11/ip-address

    • It has one volume mount. Write the volume mount destination directory into /opt/course/11/mount-destination

  3. Start a new detached Docker container:

    • Name: frontend_v3

    • Image: nginx:alpine

    • Memory limit: 30m (30 Megabytes)

    • TCP Port map: 1234/host => 80/container

 

Answer:

Dockerfile: list of commands from which an Image can be built

Image: binary file which includes all data/requirements to be run as a Container

Container: running instance of an Image

Registry: place where we can push/pull Images to/from

 

We first list all Docker containers:

 

Step 1

For the first step we stop the container:
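That's a one-liner:

```shell
docker stop frontend_v1
```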

 

Step 2

Docker inspect provides the container configuration in JSON format which contains all information asked for in this task:

It's probably a good idea to search in the inspect output for specific values. For this we could open the output directly in vim:

Now we only have to create the required files with their correct content (your container ip address might differ):
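Instead of searching manually, `--format` Go templates can extract the values directly (a sketch; it assumes the container sits in a single network and has its volume mount first in the Mounts list):

```shell
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' frontend_v2 > /opt/course/11/ip-address
docker inspect -f '{{(index .Mounts 0).Destination}}' frontend_v2 > /opt/course/11/mount-destination
```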

Which results in:

 

Step 3

Finally we can start our own container! Unfortunately with very strict conditions to follow... so let's obey!

The help output for docker run usually provides all that's needed:

Using this we can build the necessary run command:
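A sketch of the run command matching all conditions:

```shell
docker run -d --name frontend_v3 --memory 30m -p 1234:80 nginx:alpine
```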

In case the above command throws iptables errors we can restart the docker service:

Because of the port mapping we should now be able to do:

 

 

Question 12 | Git Workflow

 

You're asked to perform changes in the Git repository of the Auto-Verifier app:

  1. Clone repository /repositories/auto-verifier to /home/candidate/repositories/auto-verifier.

    Perform the following steps in the newly cloned directory

  2. Find the one of the branches dev4, dev5 and dev6 in which file config.yaml contains user_registration_level: open. Merge only that branch into branch main

  3. In branch main create a new directory logs on top repository level. To ensure the directory will be committed create hidden empty file .keep in it

  4. Commit your change with message added log directory

 

Answer:

Step 1: Clone Repository

Git is most often used to clone from and work with remote repositories on platforms like GitHub or GitLab. But most of Git's functionality can also be used locally. We go ahead and clone the local directory:

Step 2: Find the correct branch

First we list all branches:

We can simply move through all branches and check the file content:

We could also check the commit that actually performed the change:

Another way could also be to compare from branch main without actually switching into another:

Now we simply have to merge branch dev5 into main:
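The search and merge could look like this (a sketch; `git grep` can search a path across several branches at once):

```shell
git clone /repositories/auto-verifier /home/candidate/repositories/auto-verifier
cd /home/candidate/repositories/auto-verifier
git branch -a
git grep "user_registration_level: open" dev4 dev5 dev6 -- config.yaml   # hit in dev5
git checkout main
git merge dev5
```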

 

Step 3: Create new directory

We're asked to create new directory, let's see what happens if we try to commit it just like that:

Above we see that git status doesn't list the directory at all because it's empty. This means that it wouldn't be included in commits either. Now we add the requested file and see if things change:

That looks better!

 

Step 4: Commit

Always best to confirm what's going to be committed:
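Steps 3 and 4 put together (sketch):

```shell
mkdir logs
touch logs/.keep
git add logs/.keep
git status                            # confirm what will be committed
git commit -m "added log directory"
```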

 

 

Question 13 | Runtime Security of processes

 

There was a security alert which you need to follow up on. On server web-srv1 there are three processes: collector1, collector2, and collector3. It was alerted that some of these might periodically run the syscall kill, which is forbidden by a custom policy.

End the process and remove the executable for those where this is true.

 

ℹ️ You can use strace -p PID

 

Answer:

We should check the server for the mentioned processes:

Now we investigate the kernel syscalls of collector1:

After watching for a while, there don't seem to be any kill syscalls. Next one is collector2:

Gotcha! Seems like collector2 is one bad process. Still we need to check the last one, collector3:

Seems like only collector2 should be terminated. First we run ps again to see the binary path:
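The whole investigation could look like this (a sketch; the `<PID>` and binary path placeholders come from the ps output):

```shell
ssh web-srv1
ps aux | grep collector              # note PIDs and binary paths
sudo strace -p <PID> -e trace=kill   # watch for kill syscalls, Ctrl+C to stop
sudo kill <PID>                      # end collector2
sudo rm <path-to-collector2-binary>  # remove its executable
```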

 

 

Question 14 | Output redirection

 

On server app-srv1 there is a program /bin/output-generator which, who would've guessed, generates some output. It'll always generate the very same output for every run:

  1. Run it and redirect all stdout into /var/output-generator/1.out

  2. Run it and redirect all stderr into /var/output-generator/2.out

  3. Run it and redirect all stdout and stderr into /var/output-generator/3.out

  4. Run it and write the exit code number into /var/output-generator/4.out

 

Answer:

We go ahead and check the program:

 

Investigation

What a mess! Let's try to count the rows:

It looks like wc -l only counts lines that are written to stdout.

We can investigate a bit further using stdout (1) and stderr (2) redirection:

Using this we can count all lines:

 

Solution

We redirect as required:
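The four runs could look like this (sketch):

```shell
cd /var/output-generator
output-generator >  1.out            # stdout only
output-generator 2> 2.out            # stderr only
output-generator >  3.out 2>&1       # stdout and stderr
output-generator; echo $? > 4.out    # exit code of the run that just finished
```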

Note that with $? we can access the exit code of the last command run. Which means we need to let output-generator finish properly and not interrupt it. We also can't run another command afterwards or else $? won't contain the output-generator exit code.

This should give us:

 

 

Question 15 | Build and install from source

 

Install the text based terminal browser links2 from source on server app-srv1. The source is provided at /tools/links-2.14.tar.bz2 on that server.

Configure the installation process so that:

  1. The target location of the installed binary will be /usr/bin/links

  2. Support for ipv6 will be disabled

 

Answer:

Let's first check out the provided archive and extract its content:

We'll go ahead and extract the archive:

The usual process of installing from source is:

  1. ./configure (args...)

  2. make

  3. make install

Here the question requires specific configuration parameters. We can list all possible options:

It's also possible to open the ./configure file in a text editor to investigate possible options if the help output doesn't suffice.

We now use the following configuration:
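A sketch, run from the extracted source directory (the exact name of the ipv6 flag should be confirmed via ./configure --help; --prefix=/usr makes the binary install to /usr/bin/links):

```shell
./configure --prefix=/usr --without-ipv6
make
sudo make install
```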

After the configuration we continue with make:

Followed by the make install:

This should result in:

We can verify that ipv6 is disabled:

 

 

Question 16 | LoadBalancer

 

Server web-srv1 is hosting two applications, one accessible on port 1111 and one on 2222. These are served using Nginx and it's not allowed to change their config. The ip of web-srv1 is 192.168.10.60.

Create a new HTTP LoadBalancer on that server which:

  • Listens on port 8001 and redirects all traffic to 192.168.10.60:2222/special

  • Listens on port 8000 and balances traffic between 192.168.10.60:1111 and 192.168.10.60:2222 in a Random or Round Robin fashion

Nginx is already preinstalled and is recommended to be used for the implementation. Though it's also possible to use any other technologies (like Apache or HAProxy) because only the end result will be verified.

 

Answer:

First we check if everything works as claimed:

Cool, we can work with that! Here we're now going to create an Nginx LoadBalancer, but it would also be possible to use any other technology if you're more familiar with it.

There we see the two existing applications app1 and app2 which we aren't allowed to change. But we sure can use one as a template for our new LoadBalancer:

We start slowly with the easier one on port 8001 where we simply use a proxy_pass option. Save the file and:

Working! Now we go ahead and copy the first part, make some additions and use it as the second part:

The second part is mostly the same as before. It's possible to create multiple servers which listen on different ports within the same file. Here we simply created an upstream backend that contains the provided urls and we use it in the proxy_pass directive. Pretty nice right! But does it work?
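A minimal config sketch matching this description (the filename and exact layout are environment-specific; nginx's default load balancing method for an upstream is Round Robin):

```nginx
upstream backend {
    server 192.168.10.60:1111;
    server 192.168.10.60:2222;
}

server {
    listen 8000;
    location / {
        proxy_pass http://backend;    # Round Robin by default
    }
}

server {
    listen 8001;
    location / {
        proxy_pass http://192.168.10.60:2222/special;
    }
}
```

After saving, `nginx -t` validates the syntax and `systemctl reload nginx` applies it.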

Above we see that requests to web-srv1:8000 are sent to both app1 and app2. And requests on web-srv1:8001 are only sent to app2 and path /special.

 

 

Question 17 | OpenSSH Configuration

 

You need to perform OpenSSH server configuration changes on data-002. Users marta and cilla exist on that server and can be used for testing. Passwords are their username and shouldn't be changed. Please go ahead and:

  1. Disable X11Forwarding

  2. Disable PasswordAuthentication for everyone but user marta

  3. Enable Banner with file /etc/ssh/sshd-banner for users marta and cilla

     

ℹ️ In case of misconfiguration you can still access the instance using sudo lxc exec data-002 bash

 

Answer:

 

Step 1

We are required to perform ssh server config changes, always fun because nothing can ever go wrong!

We're doing a simple one first:

Save the file and restart the ssh service:

 

Step 2+3

We now need to first disable PasswordAuthentication globally and then enable it for user marta. Then we also add the Banner settings for users marta and cilla:

It's very important to add any Match lines at the very bottom of the config file, otherwise it might not get accepted and errors will be thrown during sshd service restart.
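The relevant lines could look like this (sketch):

```
# /etc/ssh/sshd_config
X11Forwarding no
PasswordAuthentication no

# Match blocks belong at the very bottom of the file
Match User marta
    PasswordAuthentication yes

Match User marta,cilla
    Banner /etc/ssh/sshd-banner
```

After saving, restart the ssh service with `sudo systemctl restart sshd`.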

Using Match User or Match Group we can override global settings for specific users and groups. Let's test if it works:

Both users marta and cilla see the banner message, but only marta can still log in using password.

 

 

Question 18 | LVM Storage

 

You're required to perform changes on LVM volumes:

  1. Reduce the volume group vol1 by removing disk /dev/vdh from it

  2. Create a new volume group named vol2 which uses disk /dev/vdh

  3. Create a 50M logical volume named p1 for volume group vol2

  4. Format that new logical volume with ext4

 

Answer:

Some helpful abbreviations when working with LVM, because command names usually start with those:

Start by having a look at PVs:

In the output above we can see that the VG vol1 uses two disks /dev/vdg and /dev/vdh. We can also get an overview of all system disks and their LVM usage:

The existing PV /dev/vda3 with VG ubuntu-vg is created by the main operating system and shouldn't be touched.

 

Step 1

We want to remove disk /dev/vdh from the existing VG vol1:

That should do it. We can also verify this by listing all PVs:

 

Step 2

Now we're going to create a new PV using that now free disk:

 

Step 3

We continue by creating a LV for our new VG:

 

Step 4

We can access LVM partitions or LVs in the usual way once we know the path:
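Put together, the four steps above could look like this (a sketch; /dev/vdh is still a PV after the vgreduce, so vgcreate can use it directly):

```shell
sudo vgreduce vol1 /dev/vdh        # step 1: remove the disk from vol1
sudo vgcreate vol2 /dev/vdh        # step 2: new VG on the now free PV
sudo lvcreate -n p1 -L 50M vol2    # step 3: 50M logical volume
sudo mkfs.ext4 /dev/vol2/p1        # step 4: format with ext4
```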

We could now go ahead and mount and use /dev/vol2/p1 as we're used to.

 

Extending a LV

Also interesting, and possibly part of the exam, is extending a mounted LV:
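A sketch (the +500M size is just an example):

```shell
sudo lvextend -L +500M -r /dev/vol2/p1   # -r also resizes the filesystem
```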

 

 

Question 19 | Regex, filter out log lines

 

On server web-srv1 there are two log files that need to be worked with:

  1. File /var/log-collector/003/nginx.log: extract all log lines where URLs start with /app/user and that were accessed by browser identity hacker-bot/1.2. Write only those lines into /var/log-collector/003/nginx.log.extracted

  2. File /var/log-collector/003/server.log: replace all lines starting with container.web, ending with 24h and that have the word Running anywhere in-between with: SENSITIVE LINE REMOVED

 

Answer:

First we find the files in the specified location

 

Step 1

To extract all log lines as required we could try some simple grep like:

But this would also catch lines like these which are not asked for:

The lines above shouldn't match because the url is hacker-bot/1.2 and NOT the browser identity.

So we'd better use a simple regex:
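A sketch, assuming a combined-access-log style format where the request URL follows the HTTP method inside quotes and the browser identity is the last quoted field:

```shell
cd /var/log-collector/003
grep -E '"[A-Z]+ /app/user.*hacker-bot/1\.2"' nginx.log | wc -l    # verify the count first
grep -E '"[A-Z]+ /app/user.*hacker-bot/1\.2"' nginx.log > nginx.log.extracted
```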

It should be 27 lines:

So we write it to the required location:

 

Step 2

Next we shall remove some sensitive logs in server.log. Anything in the pattern of:

In regex this could be:

Let's give this a go:

It should be 44 lines:

To replace these we can use sed with the same regex:

This will simply output everything to stdout for us to verify. We can even further check by counting the lines:

Looks fine, 44 lines again. Now we can use sed to replace the actual file. Still, always make a backup!
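The backup and in-place replacement could look like this (sketch):

```shell
cd /var/log-collector/003
cp server.log server.log.bak    # backup first!
sed -i -E 's/^container\.web.*Running.*24h$/SENSITIVE LINE REMOVED/' server.log
grep -c "SENSITIVE LINE REMOVED" server.log
```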

The 44 we see above is the result we want.

 

 

Question 20 | User and Group limits

 

User Jackie caused an issue with her account on server web-srv1. She ran a program which created too many subprocesses for the server to handle. A coworker of yours already solved it temporarily and limited the number of processes user jackie can run.

For this the coworker added a command into .bashrc in the home directory of jackie. But the command just sets the soft limit and not the hard limit. Jackie's password is brown in case needed.

Configure the number-of-processes limitation as a hard limit for user jackie. Use the same number currently set as a soft limit for that user. Do it in the proper way, not via .bashrc.

On the same server you should enforce that group operators can only ever log in once at the same time, use maxlogins for this.

 

ℹ️ It's not possible to test/verify the maxlogins after configuration due to how the server has been configured

 

 

Answer:

 

Step 1

Well, that's a lot. We check out that user and its limits first:

Using ulimit we can see all configured limits, and there is max user processes set to 1024. We should check how it has been set in .bashrc:

This works, but user jackie could change it herself like this:

First we remove that line that was added as a temporary fix:

Next we go ahead and configure it via /etc/security/limits.conf (we need to be root for this). In that file there are already some useful examples at the bottom. We add a new line:
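The line could look like this (the 1024 matches the soft limit seen in the ulimit output above):

```
# /etc/security/limits.conf
jackie hard nproc 1024
```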

Now we can confirm our change:

User jackie is no longer able to change that limit herself.

 

Step 2

The other ticket was about implementing a limitation for group operators, so we add it to the file as well:
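Groups are referenced with a leading @ in limits.conf, so the line could be:

```
# /etc/security/limits.conf
@operators hard maxlogins 1
```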

We might not be able to test this setting due to how the server has been configured.